29 research outputs found

    GALP: A hybrid artificial intelligence algorithm for generating covering array

    Today, many useful algorithms exist for covering array (CA) generation, one of the branches of combinatorial testing. The major challenge in CA generation is producing an array with the minimum number of test cases (efficiency) within an acceptable run-time (performance), especially for large systems. CA generation strategies fall into several categories, the most important being computational and meta-heuristic. Computational strategies generally offer high performance but poor efficiency; meta-heuristic strategies, in contrast, achieve good efficiency at the cost of lower performance, and no existing strategy does well on both metrics. In this paper, we combine the genetic algorithm with Augmented Lagrangian Particle Swarm Optimization with Fractional Order Velocity to produce test suites that are satisfactory in terms of both efficiency and performance. A simple and effective minimization function is also employed to further increase efficiency. The evaluation results show that the proposed strategy outperforms existing approaches in terms of both efficiency and performance.
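    As a rough illustration of the objective such a hybrid optimizer works against, the sketch below counts how many t-way parameter interactions a candidate test suite covers; the data structures and function names are illustrative assumptions, not the paper's GALP implementation.

```python
# Minimal sketch (not the paper's GALP implementation): a fitness function that
# measures what fraction of all t-way value combinations a candidate suite covers.
from itertools import combinations

def covered_interactions(suite, strength):
    """Set of (column-indices, values) tuples covered by the suite."""
    covered = set()
    num_params = len(suite[0])
    for cols in combinations(range(num_params), strength):
        for row in suite:
            covered.add((cols, tuple(row[c] for c in cols)))
    return covered

def coverage_fitness(suite, domains, strength=2):
    """Fraction of required t-way interactions covered (1.0 = full covering array)."""
    required = 0
    for cols in combinations(range(len(domains)), strength):
        n = 1
        for c in cols:
            n *= len(domains[c])
        required += n
    return len(covered_interactions(suite, strength)) / required

# Example: 3 binary parameters; this 4-row suite is a full 2-way covering array.
domains = [[0, 1], [0, 1], [0, 1]]
suite = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(coverage_fitness(suite, domains, strength=2))  # -> 1.0
```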

    Using memetic algorithm for robustness testing of contract-based software models

    Graph Transformation System (GTS) can formally specify the behavioral aspects of complex systems through graph-based contracts. Test suite generation under normal conditions from GTS specifications is a task well suited to evolutionary algorithms such as the Genetic Algorithm and Particle Swarm Optimization (PSO) metaheuristics. However, testing a system's vulnerability to unexpected events such as invalid inputs is also essential. Furthermore, these global search algorithms tend to make big jumps in the system's state space that are not concentrated on particular test goals. In this paper, we extend the HGAPSO approach into a cost-aware Memetic Algorithm (MA) that makes small local changes through a proposed local search operator to optimize both coverage score and testing cost. Moreover, we test GTS specifications not only under normal events but also under unexpected situations, investigating three coverage-based testing strategies: normal testing, robustness testing, and a hybrid strategy. The effectiveness of the proposed test generation algorithm and the testing strategies is evaluated through a form of mutation analysis at the model level. Our experimental results show that (1) the hybrid testing strategy outperforms the normal and robustness testing strategies in terms of fault-detection capability, (2) robustness testing is the most cost-efficient strategy, and (3) the proposed MA with the hybrid testing strategy outperforms the state-of-the-art global search algorithms.
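    The generic memetic loop the abstract refers to, a global evolutionary step followed by a small local refinement of each offspring, can be sketched as follows. This is a simplified stand-in, not the paper's cost-aware HGAPSO extension; the bit-string fitness is a placeholder for the coverage-minus-cost objective.

```python
# Illustrative memetic-algorithm skeleton: global evolution plus per-offspring local search.
import random

def fitness(candidate):
    # Placeholder objective: stands in for coverage score minus testing cost.
    return sum(candidate)

def mutate(candidate, rate=0.1):
    return [1 - g if random.random() < rate else g for g in candidate]

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def local_search(candidate, steps=5):
    """Hill-climbing refinement: flip single genes, keep improvements (the 'memetic' part)."""
    best = candidate[:]
    for _ in range(steps):
        i = random.randrange(len(best))
        neighbor = best[:]
        neighbor[i] = 1 - neighbor[i]
        if fitness(neighbor) > fitness(best):
            best = neighbor
    return best

def memetic_search(pop_size=20, genome_len=16, generations=50):
    population = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        offspring = [local_search(mutate(crossover(random.choice(parents), random.choice(parents))))
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

print(fitness(memetic_search()))
```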

    Performance Modeling and Analysis of Software Architectures Specified Through Graph Transformations

    Software architecture plays an important role in the success of modern, large, distributed software systems. For many software systems, especially safety-critical ones, it is important to specify their architectures using formal modeling notations, which makes it possible to assess functional and non-functional properties on the designed models. Graph Transformation System (GTS) is a formal yet understandable language well suited to architectural modeling. Most existing work on architectural modeling and analysis with GTS concentrates on functional aspects, while for many systems it is crucial to consider non-functional aspects at the architectural level. In this paper, we present an approach to performance analysis of software architectures specified through GTS. We first enrich an existing architectural style, specified through GTS, with performance information. Then, performance models are generated in PEPA (Performance Evaluation Process Algebra), a formal language based on stochastic process algebra, from the enriched GTS models. Finally, we analyze measures such as throughput and the utilization of different software components on the generated performance models. All the main concepts are illustrated through a case study.
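    PEPA tooling itself is out of scope here, but the kind of quantity such a performance model yields can be illustrated with a small continuous-time Markov chain solved for its steady state; the two-state server and its rates below are assumptions for illustration only.

```python
# Numeric sketch of the analysis a process-algebra performance model enables:
# solve a tiny CTMC for its steady-state distribution and derive utilization/throughput.
import numpy as np

# States: 0 = server idle, 1 = server busy (assumed toy model, not the paper's case study).
arrival_rate = 2.0   # requests/sec (assumed)
service_rate = 5.0   # completions/sec (assumed)

# Generator matrix Q of the CTMC.
Q = np.array([
    [-arrival_rate,  arrival_rate],
    [ service_rate, -service_rate],
])

# Steady-state distribution pi solves pi @ Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

utilization = pi[1]                   # probability the server is busy
throughput = service_rate * pi[1]     # completions per second at steady state
print(f"utilization={utilization:.3f}, throughput={throughput:.3f}/s")
```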

    Using deep reinforcement learning to search reachability properties in systems specified through graph transformation

    Today, model checking is one of the essential techniques for the verification of software systems. It can verify properties such as reachability, in which the entire state space is searched for a desired state. However, model checking may suffer from the state space explosion problem, in which not all states can be generated because of exponential resource usage. Although the results of recent model checking approaches are promising, there is still room for improvement in accuracy and in the number of explored states. In this paper, using deep reinforcement learning and two neural networks, we propose an approach that increases the accuracy of the generated witnesses and reduces hardware resource usage. In this approach, an agent starts exploring the state space with no prior knowledge and gradually learns which actions are proper or improper from the rewards and penalties it receives from the environment. Once the dataset is filled with the agent's experiences, two neural networks evaluate the quality of each action in each state, and the best action is then selected. The main implementation challenges are encoding the states, feature engineering, feature selection, reward engineering, handling invalid actions, and configuring the neural networks. The proposed approach has been implemented in the GROOVE toolset and, in most of the case studies, it overcame the state space explosion problem. It also outperforms existing solutions in generating shorter witnesses and exploring fewer states: on average, the proposed approach is nearly 400% better than the other approaches at exploring fewer states, 300% better at generating shorter witnesses, and 37% more accurate in finding the goal state.
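    The reward-guided exploration idea can be illustrated, in a much simplified form, with tabular Q-learning over a toy state graph; the paper itself uses two neural networks and the GROOVE toolset, so the graph, rewards, and hyperparameters below are purely illustrative.

```python
# Toy sketch of reward-guided state-space exploration toward a reachability goal.
import random
from collections import defaultdict

# Assumed toy transition graph: state -> list of successor states; 5 is the goal state.
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
goal = 5

Q = defaultdict(float)          # Q[(state, next_state)] value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step_reward(next_state):
    # Reward reaching the goal; small penalty per step to favor short witnesses.
    return 10.0 if next_state == goal else -1.0

for _ in range(500):            # training episodes
    state = 0
    while state != goal and graph[state]:
        actions = graph[state]
        if random.random() < epsilon:
            nxt = random.choice(actions)
        else:
            nxt = max(actions, key=lambda s: Q[(state, s)])
        target = step_reward(nxt) + gamma * max((Q[(nxt, s)] for s in graph[nxt]), default=0.0)
        Q[(state, nxt)] += alpha * (target - Q[(state, nxt)])
        state = nxt

# Greedy rollout: the learned values yield a short witness path to the goal.
state, witness = 0, [0]
while state != goal:
    state = max(graph[state], key=lambda s: Q[(state, s)])
    witness.append(state)
print("witness:", witness)      # e.g. [0, 1, 3, 5]
```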

    Using Bayesian optimization algorithm for model-based integration testing


    An MDA-Based Modeling and Design of Service Oriented Architecture

    Traditional approaches to software systems development, such as tools and modeling frameworks, are appropriate for building individual object-oriented or component-based software, but they are not suitable for designing flexible distributed enterprise systems and open environments. In recent years, service-oriented architecture (SOA) has been proposed as a suitable architecture for developing such systems. Most current approaches to employing SOA are tailored to specific domains and hence are not general purpose. Therefore, to gain the full benefits of this technology, a more effective, general approach to modeling and designing these complex distributed systems is required. In this paper, we present a model-driven approach to SOA modeling and to designing complex distributed systems. First, the PIM of the business system is derived and expressed in standard UML modeling constructs; this PIM is then transformed into an SOA-based PIM by a transformation tool. Once the SOA-based PIM is obtained, it can be used to generate a PSM for a specific platform such as Web Services, Jini, or others. To clarify how this PSM can be generated, we use Web Services as the target platform and show the steps of the transformation.
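    A minimal sketch of the two-step mapping described above, lifting a PIM component to a SOA-based service and then projecting it onto a Web Services PSM, might look as follows; the model shapes and the WSDL-like output are illustrative assumptions rather than the paper's transformation rules.

```python
# Illustrative PIM -> SOA-based PIM -> Web Services PSM pipeline (all names assumed).
from dataclasses import dataclass, field
from typing import List

@dataclass
class PimComponent:                 # UML-style platform-independent element
    name: str
    operations: List[str]

@dataclass
class Service:                      # SOA-based PIM element
    name: str
    operations: List[str] = field(default_factory=list)

def pim_to_soa(components: List[PimComponent]) -> List[Service]:
    """Lift each business component to a coarse-grained service."""
    return [Service(name=c.name + "Service", operations=list(c.operations)) for c in components]

def soa_to_wsdl_stub(service: Service) -> str:
    """Project a service onto a Web Services PSM as a minimal WSDL-like port type."""
    ops = "\n".join(f'    <operation name="{op}"/>' for op in service.operations)
    return f'<portType name="{service.name}">\n{ops}\n</portType>'

pim = [PimComponent("OrderManagement", ["placeOrder", "cancelOrder"])]
for svc in pim_to_soa(pim):
    print(soa_to_wsdl_stub(svc))
```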

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature search techniques could be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for downloading the annotation data and for blind testing of new methods. We expect this benchmark to be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
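    One of the baselines named above, TF-IDF ranking of candidate documents against a seed article, can be sketched in a few lines with scikit-learn; the toy documents are invented, and the real annotation data is available from the URL given in the abstract.

```python
# Sketch of a TF-IDF baseline: rank candidate abstracts by cosine similarity to a seed article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed = "deep learning methods for protein structure prediction"
candidates = [
    "neural networks applied to protein folding and structure prediction",
    "randomized clinical trial of a new antihypertensive drug",
    "transformer models for biomedical text mining",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([seed] + candidates)

# Rank candidates by cosine similarity to the seed document (row 0).
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for score, doc in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.3f}  {doc}")
```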

    Style-based modeling and verification of fault tolerance service oriented architectures

    Service-Oriented Architecture (SOA) is a flexible, loosely coupled and dynamic architecture for developing distributed systems. Since this architecture is used more and more in the design of software systems, creating dependable services within it is one of the main challenges. Dependable systems must consider different QoS levels covering non-functional aspects such as security, safety and accessibility. One of these non-functional aspects is fault tolerance. In this paper, in order to obtain a fault-tolerant system, the SOA core style is first extended with the required parameters. Then, different fault-tolerance communication and reconfiguration mechanisms are developed as graph transformation rules. Finally, the proposed model is verified using the model checking techniques available for graph transformation systems.
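    The reconfiguration idea behind such rules can be sketched informally as a rewrite on an architecture graph: when a service node fails, client connectors are rewired to a running backup. The graph and rule below are illustrative assumptions; the paper expresses them formally as graph transformation rules and verifies them by model checking.

```python
# Toy failover reconfiguration expressed as a rewrite on an architecture graph (names assumed).

# Architecture graph: node -> attributes, plus a set of (client, service) connector edges.
nodes = {
    "PaymentService":  {"kind": "service", "status": "failed",  "backup": "PaymentReplica"},
    "PaymentReplica":  {"kind": "service", "status": "running", "backup": None},
    "WebFrontend":     {"kind": "client"},
}
edges = {("WebFrontend", "PaymentService")}

def apply_failover_rule(nodes, edges):
    """If a connector targets a failed service with a running backup, rewire it to the backup."""
    rewired = set()
    for client, service in edges:
        attrs = nodes[service]
        backup = attrs.get("backup")
        if attrs.get("status") == "failed" and backup and nodes[backup]["status"] == "running":
            rewired.add((client, backup))
        else:
            rewired.add((client, service))
    return rewired

edges = apply_failover_rule(nodes, edges)
print(edges)   # {('WebFrontend', 'PaymentReplica')}
```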

    A comprehensive review of the security flaws of hashing algorithms

    Blockchain is an emerging technology that is widely used because of its efficiency and functionality. The hash function, as a supporting element of the data structure, is critical for assuring a blockchain's availability and security. Hash functions, originally designed for use in a few cryptographic schemes with specific security needs, have since become standard fare for many developers and protocol designers, who regard them as black boxes with magical characteristics. Message digesting, password verification, data structures, compiler operation, and linking file names and paths together are contemporary examples of hash function applications. Since 2004, there has been an exponential increase in the number and power of attacks against standard hash algorithms. In this paper, we investigate the reported security flaws in well-known hashing algorithms and determine which of them are broken. A hash function is said to be broken when an attack is found which, by exploiting details of how the hash function operates, finds a preimage, a second preimage or a collision faster than the corresponding generic attack. To provide background, we also summarize the types of attacks in this area. Finally, we summarize the broken hash algorithms in a table that is helpful when selecting, designing or using blockchains.
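    For reference, the generic-attack baseline in that definition depends only on the digest length n: roughly 2^n work for a preimage or second preimage and roughly 2^(n/2) for a collision (the birthday bound). The sketch below prints these bounds for a few common algorithms using Python's hashlib.

```python
# Generic attack cost baselines implied by digest length (the yardstick for "broken").
import hashlib

for name in ["md5", "sha1", "sha256", "sha3_256"]:
    n_bits = hashlib.new(name).digest_size * 8
    print(f"{name:9s} digest={n_bits:3d} bits  "
          f"generic preimage ~2^{n_bits}  generic collision ~2^{n_bits // 2}")
```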

    Educational Advisor System Implemented by Web-Based Fuzzy Expert Systems
